6 research outputs found

    Probabilistic graph formalisms for meaning representations

    In recent years, many datasets have become available that represent natural language semantics as graphs. To use these datasets in natural language processing (NLP), we require probabilistic models of graphs. Finite-state models have been very successful for NLP tasks on strings and trees because they are probabilistic and composable. Are there equivalent models for graphs? In this thesis, we survey several graph formalisms, focusing on whether they are probabilistic and composable, and we contribute several new results. In particular, we study the directed acyclic graph automata languages (DAGAL), the monadic second-order graph languages (MSOGL), and the hyperedge replacement languages (HRL). We prove that DAGAL cannot be made probabilistic, we explain why MSOGL also most likely cannot be made probabilistic, and we review the fact that HRL are not composable. We then review a subfamily of HRL and MSOGL: the regular graph languages (RGL; Courcelle 1991), which have not been widely studied, and in particular have not been studied in an NLP context. Although Courcelle (1991) only sketches a proof, we present a full, more NLP-accessible proof that RGL are a subfamily of MSOGL. We prove that RGL are probabilistic and composable, and we provide a novel Earley-style parsing algorithm for them that runs in time linear in the size of the input graph. We compare RGL to two other new formalisms: the restricted DAG languages (RDL; Björklund et al. 2016) and the tree-like languages (TLL; Matheja et al. 2015). We show that RGL and RDL are incomparable; TLL and RDL are incomparable; and either RGL are incomparable to TLL, or RGL are contained within TLL. This thesis provides a clearer picture of this field from an NLP perspective, and suggests new theoretical and empirical research directions.
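    To make "probabilistic" concrete: a grammar-based graph formalism is probabilistic in this sense when the weights of rules sharing a left-hand-side nonterminal form a distribution, so derivations define a distribution over graphs. The sketch below illustrates that property check on a toy rule set; the rule names and fragments are placeholders for illustration, not actual RGL productions.

```python
# A minimal sketch of what "probabilistic" means for a grammar-based
# graph formalism: rules sharing a left-hand-side nonterminal carry
# weights that sum to 1, so derivations define a distribution.
# The rules below are toy placeholders, not actual RGL productions.
from collections import defaultdict

# (lhs nonterminal, rhs description, weight) -- hypothetical rules
rules = [
    ("S",  "graph fragment A", 0.7),
    ("S",  "graph fragment B", 0.3),
    ("NP", "graph fragment C", 1.0),
]

def is_proper(rules, tol=1e-9):
    """Check that the weights for each left-hand side form a distribution."""
    totals = defaultdict(float)
    for lhs, _, w in rules:
        totals[lhs] += w
    return all(abs(total - 1.0) <= tol for total in totals.values())

print(is_proper(rules))  # True: each nonterminal's rule weights sum to 1
```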

    Semantic Graph Parsing with Recurrent Neural Network DAG Grammars

    Semantic parses are directed acyclic graphs (DAGs), so semantic parsing should be modeled as graph prediction. But predicting graphs presents difficult technical challenges, so it is simpler and more common to predict the linearized graphs found in semantic parsing datasets using well-understood sequence models. The cost of this simplicity is that the predicted strings may not be well-formed graphs. We present recurrent neural network DAG grammars, a graph-aware sequence model that ensures only well-formed graphs while sidestepping many difficulties in graph prediction. We test our model on the Parallel Meaning Bank, a multilingual semantic graphbank. Our approach yields competitive results in English and establishes the first results for German, Italian and Dutch. Comment: 9 pages, to appear in EMNLP 2019.
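    The core idea can be sketched as grammar-constrained decoding: at each step the decoder may only emit tokens licensed by the current derivation state, so every complete output linearizes a well-formed graph. The sketch below is an assumption-laden illustration of that masking step, not the paper's model; the token names and the set of legal tokens are hypothetical.

```python
# A minimal sketch (not the paper's model) of grammar-constrained
# sequence prediction: the decoder picks the best-scoring token among
# those the derivation state licenses, so ill-formed outputs are
# impossible by construction.

def constrained_decode(step_scores, legal_tokens):
    """Pick the highest-scoring token among those the grammar allows.

    step_scores:  dict mapping token -> model score at this step
    legal_tokens: set of tokens licensed by the derivation state
    (both are hypothetical stand-ins for the real machinery)
    """
    allowed = {t: s for t, s in step_scores.items() if t in legal_tokens}
    if not allowed:
        raise ValueError("grammar state licenses no token")
    return max(allowed, key=allowed.get)

# Toy step: the model prefers an ill-formed token, but the grammar
# mask forces a well-formed choice.
scores = {"close-paren": 2.1, "open-edge": 1.4, "node-X": 0.9}
legal = {"open-edge", "node-X"}
print(constrained_decode(scores, legal))  # "open-edge"
```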

    Recurrent Neural Networks as Weighted Language Recognizers

    We investigate the computational complexity of various problems for simple recurrent neural networks (RNNs) as formal models for recognizing weighted languages. We focus on single-layer, ReLU-activation, rational-weight RNNs with softmax, which are commonly used in natural language processing applications. We show that most problems for such RNNs are undecidable, including consistency, equivalence, minimization, and the determination of the highest-weighted string. However, for consistent RNNs the last problem becomes decidable, although the solution length can surpass all computable bounds. If additionally the string is limited to polynomial length, the problem becomes NP-complete and APX-hard. In summary, this shows that approximations and heuristic algorithms are necessary in practical applications of those RNNs.
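    For orientation, here is a minimal sketch of the RNN class the abstract names: one ReLU hidden layer over rational weights with a per-step softmax, where the weight of a string is the product of per-symbol probabilities, including an end-of-string symbol. The vocabulary, sizes, and parameters below are random placeholders, not a trained model.

```python
import numpy as np

# A minimal sketch of an RNN as a weighted language recognizer:
# single ReLU layer, softmax output per step. The weight of a string
# is the product of per-symbol softmax probabilities.
rng = np.random.default_rng(0)
VOCAB = ["a", "b", "</s>"]               # index 2 is end-of-string
H = 4                                    # hidden size (illustrative)
W_h = rng.normal(size=(H, H))            # hidden-to-hidden weights
W_x = rng.normal(size=(H, len(VOCAB)))   # input-to-hidden (one-hot inputs)
W_o = rng.normal(size=(len(VOCAB), H))   # hidden-to-output weights

def string_weight(s):
    """Weight of s = product over steps of the softmax prob of each symbol."""
    h = np.zeros(H)
    weight = 1.0
    for sym in list(s) + ["</s>"]:
        logits = W_o @ h
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        weight *= probs[VOCAB.index(sym)]
        x = np.zeros(len(VOCAB))
        x[VOCAB.index(sym)] = 1.0
        h = np.maximum(0.0, W_h @ h + W_x @ x)  # ReLU hidden update
    return weight

print(string_weight("ab"))  # weight assigned to the string "ab"
```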

    Parsing Graphs with Regular Graph Grammars


    The problem with probabilistic DAG automata for semantic graphs

    Semantic representations in the form of directed acyclic graphs (DAGs) have been introduced in recent years, and to model them, we need probabilistic models of DAGs. One model that has attracted some attention is the DAG automaton, but it has not been studied as a probabilistic model. We show that some DAG automata cannot be made into useful probabilistic models by the nearly universal strategy of assigning weights to transitions. The problem affects single-rooted, multi-rooted, and unbounded-degree variants of DAG automata, and appears to be pervasive. It does not affect planar variants, but these are problematic for other reasons. Comment: To appear in NAACL-HLT 2019.
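    The weighting strategy under scrutiny can be sketched as follows: each transition carries a weight, and a run is scored by the product of the weights of the transitions it uses. The states, labels, and run below are toy placeholders, not the paper's automata; the paper's result is that such weights cannot, in general, be normalized into a distribution over DAGs.

```python
# A minimal sketch of the nearly universal weighting strategy the
# paper examines: score a run by the product of its transition
# weights. Transitions are (state, node label, successor states);
# everything here is a toy placeholder.
transition_weights = {
    ("q0", "want", ("q1", "q2")): 0.6,
    ("q1", "boy", ()): 0.5,
    ("q2", "go", ("q1",)): 0.4,
}

def run_weight(run):
    """Weight of a run = product of the weights of its transitions."""
    w = 1.0
    for t in run:
        w *= transition_weights[t]
    return w

toy_run = [("q0", "want", ("q1", "q2")),
           ("q1", "boy", ()),
           ("q2", "go", ("q1",))]
print(run_weight(toy_run))  # 0.12 for this run; normalizing these
# weights into a distribution over all DAGs is what can fail
```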

    (Re)introducing regular graph languages
